2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm
Authors
Abstract
In a path-breaking paper last year, Pat and Betty O'Neil and Gerhard Weikum proposed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm [15]. Their improvement is called LRU/k and advocates giving priority to buffer pages based on the kth most recent access. (The standard LRU algorithm is denoted LRU/1 in this terminology.) If P1's kth most recent access is more recent than P2's, then P1 will be replaced after P2. Intuitively, LRU/k for k > 1 is a good strategy, because it gives low priority to pages that have been scanned or to pages that belong to a big randomly accessed file (e.g., the account file in TPC/A). They found that LRU/2 achieves most of the advantage of their method. The one problem of LRU/2 is the processor overhead to implement it: in contrast to LRU, each page access requires log N work to manipulate a priority queue, where N is the number of pages in the buffer. Question: is there a low-overhead way (constant overhead per access, as in LRU) to achieve page replacement performance similar to LRU/2? Answer: yes. Our "Two Queue" algorithm (hereafter 2Q) has constant time overhead, performs as well as LRU/2, and requires no tuning. These results hold for real (DB2 commercial, Swiss bank) traces as well as simulated ones. Based on these experiments, we estimate that 2Q will provide a few percent improvement over LRU without increasing the overhead by more than a constant additive factor.

1 Background

Fetching data from disk requires at least a factor of 1000 more time than fetching data from a RAM buffer. For this reason, good use of the buffer can significantly improve the throughput and response time of any data-intensive system. Until the early 80's, the least recently used buffer replacement algorithm (replace the page that was least recently accessed or used) was the algorithm of choice in nearly all cases.
Indeed, the theoretical community blessed it by showing that LRU never replaces more than a factor B as many elements as an optimal clairvoyant algorithm (where B is the size of the buffer) [19]. Factors this large can heavily influence the behavior of a database system, however. Furthermore, database systems usually have access patterns in which LRU performs poorly, as noted by Stonebraker [21], Sacco and Schkolnick [18], and Chou and DeWitt [5]. As a result, there …
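The contrast the abstract draws, O(log N) priority-queue work per access for LRU/2 versus constant-time queue operations for 2Q, can be illustrated with a sketch of the simplified 2Q policy: a FIFO queue (A1) holds pages seen only once, and an LRU queue (Am) holds pages re-referenced while still resident. This is a minimal illustration under stated assumptions, not the paper's implementation; the class name, the `a1_fraction` split, and the eviction rule are choices made for the sketch (the full algorithm in the paper refines A1 into in-memory and ghost portions with tunable thresholds).

```python
from collections import OrderedDict

class TwoQueueCache:
    """Sketch of a simplified 2Q policy (illustrative, not the paper's code).

    A1: FIFO of pages accessed once; Am: LRU of re-referenced pages.
    Every operation is O(1), unlike the O(log N) heap updates of LRU/2.
    """

    def __init__(self, size, a1_fraction=0.25):
        # a1_fraction is an assumed split of the buffer reserved for A1.
        self.size = size
        self.a1_max = max(1, int(size * a1_fraction))
        self.a1 = OrderedDict()  # FIFO: insertion order, evict oldest
        self.am = OrderedDict()  # LRU: most recently used at the end
        self.hits = 0
        self.misses = 0

    def access(self, page):
        if page in self.am:
            self.am.move_to_end(page)   # hit in Am: promote to MRU
            self.hits += 1
        elif page in self.a1:
            del self.a1[page]           # second access: promote to Am
            self.am[page] = True
            self.hits += 1
        else:
            self.misses += 1
            if len(self.a1) + len(self.am) >= self.size:
                self._evict()
            self.a1[page] = True        # first access: tail of the FIFO

    def _evict(self):
        # Reclaim from A1 first, so a long scan of cold pages cannot
        # flush the hot pages accumulated in Am (scan resistance).
        if len(self.a1) >= self.a1_max and self.a1:
            self.a1.popitem(last=False)  # drop FIFO head
        elif self.am:
            self.am.popitem(last=False)  # drop LRU victim
        else:
            self.a1.popitem(last=False)
```

The point of the design is visible in a short run: a hot page that was re-referenced lives in Am, and a subsequent scan of never-repeated pages cycles through A1 without evicting it, which is exactly the scan behavior the abstract credits to LRU/k for k > 1.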
Similar resources
VLRU: Buffer Management in Client-Server Systems
In a client-server system, when LRU or a variant buffer replacement strategy is used on both the client and the server, cache performance on the server side is very poor, mainly because of pages duplicated in both systems. This paper introduces a server buffer replacement strategy that uses a replaced page-id, rather than a requested page-id, as the primary information for its operations. The impo...
X3: A Low Overhead High Performance Buffer Management Replacement Algorithm
In a path-breaking paper last year Pat and Betty O'Neil and Gerhard Weikum proposed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm [15]. Their improvement is called LRU/k and advocates giving priority to buffer pages based on the kth most recent access. (The standard LRU algorithm is denoted LRU/1 according to this terminology.) If P1's kth most recent ac...
متن کاملThe Impact of Buffering on Closest Pairs Queries Using R-Trees
In this paper, the most appropriate buffer structure, page replacement policy and buffering scheme for closest pairs queries, where both spatial datasets are stored in R-trees, are investigated. Three buffer structures (i.e. single, hybrid and by levels) over two buffering schemes (i.e. local to each R-tree, and global to the query) using several page replacement algorithms (e.g. FIFO, LRU, 2Q,...
Improving Adaptive Replacement Cache (ARC) by Reuse Distance
Buffer caches are used to enhance the performance of file or storage systems by reducing I/O requests to underlying storage media. In particular, a multi-level buffer cache hierarchy is commonly deployed on network file systems or storage systems. In this environment, the I/O access pattern on second-level buffer caches of file servers or storage controllers differs from that on upper-level caches...
ARC: A Self-Tuning, Low Overhead Replacement Cache
We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashio...
Publication date: 1994